233 Transforming the Academy of Community Reviewers (ACR) course into an E-Learning course in the Post COVID-19 Pandemic Era
- Jasmine Neal, Carolynn T. Jones, Tanya Mathews, Virginia Macias, Sapna Varia, Valerie Snavely
Journal of Clinical and Translational Science, Volume 8, Issue s1, April 2024, p. 70. Published online by Cambridge University Press, 03 April 2024. Open access.
OBJECTIVES/GOALS: The objectives of the Academy of Community Reviewers (ACR) are to: (1) provide comprehensive education and training to community members about clinical research and the community review process for clinical research grants; and (2) collaborate with the community in developing the training to ensure beneficence and meaningful engagement.
METHODS/STUDY POPULATION: This training targets community members who will serve as future reviewers of The Ohio State University Center for Clinical and Translational Science (CCTS) pilot grant submissions and other grant submissions, and as expert consultants on other projects needing community perspectives. In 2019 and 2020, this training was offered as a live session. Thirty-eight community reviewers were trained and have served as grant reviewers and consultants on over 70 projects. Based on feedback from former graduates, time demands, logistics, and technology advances warranted transitioning the course to an online learning platform. ACR graduates were consulted on the course redesign and updates. Course revisions include material on DEIA, implicit bias, and health equity in clinical research, delivered through narrated lectures.
RESULTS/ANTICIPATED RESULTS: Each of the 7 modules (comprising a total of 15 submodules) will have a brief summary knowledge check. The module “How to incorporate diversity, equity, inclusion and accessibility in health research” will invite trainees to independently explore their own social identity and biases through a guided exercise. The last (7th) module will offer interactive opportunities to submit grant reviews and to participate in an online grant review session, geared to The Ohio State University CCTS. ACR graduates have been invited to consult on educational material and to pilot the new course. Demographic data, knowledge assessments, and module evaluations will be collected. An overall course evaluation and focus group interviews with graduates will also be analyzed for quality improvement and contributions to grant reviews.
DISCUSSION/SIGNIFICANCE: The increased accessibility of the ACR course will foster more inclusive community engagement and support the development of clinical and translational research that is innovative, efficient, equitable, and relevant to its beneficiaries. This in-depth community reviewer training has been designed to be adopted and customized by other CTSA hubs.
The CTSA External Reviewer Exchange Consortium (CEREC): Engagement and efficacy
- Margaret Schneider, April Bagaporo, Jennifer A. Croker, Adam Davidson, Pam Dillon, Aileen Dinkjian, Madeline Gibson, Nia Indelicato, Amy J. Jenkins, Tanya Mathew, Renee McCoy, Hardeep Ranu, Kai Zheng
Journal of Clinical and Translational Science, Volume 3, Issue 6, December 2019, pp. 325-331. Published online by Cambridge University Press, 02 October 2019. Open access.
Introduction: Many institutions evaluate applications for local seed funding by recruiting peer reviewers from their own institutional community. Smaller institutions, however, often face difficulty locating qualified local reviewers who are not in conflict with the proposal. As a larger pool of reviewers may be accessed through a cross-institutional collaborative process, nine Clinical and Translational Science Award (CTSA) hubs formed a consortium in 2016 to facilitate reviewer exchanges. Data were collected to evaluate the feasibility and preliminary efficacy of the consortium.
Methods: The CTSA External Reviewer Exchange Consortium (CEREC) has been supported by a custom-built web-based application that facilitates the process and tracks the efficiency and productivity of the exchange.
Results: All nine of the original CEREC members remain actively engaged in the exchange. Between January 2017 and May 2019, CEREC supported the review process for 23 individual calls for proposals. Of the 412 reviews requested, 368 were received, for a fulfillment ratio of 89.3%. The yield on reviewer invitations has remained consistently high, with approximately one-third of invitations accepted; of the reviewers who agreed to provide a review, 88.3% submitted a complete review. Surveys of reviewers and pilot program administrators indicate high satisfaction with the process.
Conclusions: These data indicate that a reviewer exchange consortium is feasible, adds value to participating partners, and is sustainable over time.
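The abstract reports the exchange's throughput as three ratios but does not describe how the custom web application computes them. The following is a minimal, hypothetical Python sketch of how such metrics could be derived from an invitation log; the Invitation model, status names, and function are illustrative assumptions, not CEREC's actual schema.

```python
# Hypothetical invitation-tracking model (not the actual CEREC application).
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    DECLINED = "declined"
    ACCEPTED = "accepted"    # agreed to review but has not yet submitted
    COMPLETED = "completed"  # submitted a complete review


@dataclass
class Invitation:
    proposal_id: str
    reviewer_email: str
    status: Status


def exchange_metrics(invitations: list[Invitation], reviews_requested: int) -> dict:
    accepted = sum(i.status in (Status.ACCEPTED, Status.COMPLETED) for i in invitations)
    completed = sum(i.status is Status.COMPLETED for i in invitations)
    return {
        # share of requested reviews that were ultimately received
        "fulfillment_ratio": completed / reviews_requested,
        # share of emailed invitations that were accepted
        "acceptance_yield": accepted / len(invitations),
        # share of accepting reviewers who submitted a complete review
        "completion_rate": completed / accepted if accepted else 0.0,
    }
```

Applied to the figures reported above, 368 completed reviews against 412 requested reproduces the 89.3% fulfillment ratio, and the 88.3% completion rate follows from dividing completed reviews by accepted invitations.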
2286 A CTSA External Reviewer Exchange Consortium: Description and lessons learned
- Margaret Schneider, Tanya Mathew, Madeline Gibson, Christine Zeller, Hardeep Ranu, Adam Davidson, Pamela Dillon, Nia Indelicato, Aileen Dinkjian
Journal of Clinical and Translational Science, Volume 2, Issue S1, June 2018, p. 2. Published online by Cambridge University Press, 21 November 2018. Open access.
OBJECTIVES/SPECIFIC AIMS: To share the experience gained and lessons learned from a cross-CTSA collaborative effort to improve the review process for pilot studies awards by exchanging external reviewers.
METHODS/STUDY POPULATION: The CEREC process is managed by a web-based tracking system that enables all participating members to view the status of reviewer invitations at any time. This online tracking system is supplemented by monthly conference calls during which new calls for proposals are announced and best practices are identified. Each CTSA hub customized the CEREC model based on its individual pilot program needs and review process. Some hubs have supplemented their internal reviews by posting on CEREC only those proposals that lack reviewers with significant expertise within their institutions. Other hubs have requested 1–3 external reviewers for each of their proposals or for a selection of the most promising proposals. In anticipation of potential scoring discrepancies, several hubs added a self-assessment of reviewer expertise and confidence at the end of each review. If a proposal is on the cusp of fundability, the reviewers’ self-assessment may be taken into account. In addition to the tracking data collected by the online system, a survey of CEREC reviewers was conducted using Qualtrics.
RESULTS/ANTICIPATED RESULTS: Across the 144 proposals submitted for review, CEREC members issued a total of 396 email invitations to potential reviewers. The number of invitations required to yield a reviewer ranged from 1 to 17. A total of 224 invitations were accepted, for a response rate of 56%. No external reviewer could be located for 5 proposals (3%). Ultimately, 196 completed reviews were submitted, for a completion rate of 87%. The most common reasons for non-completion after acceptance of an invitation were reviewer illness and discovery of a conflict of interest. CEREC members found the process extremely useful for locating qualified reviewers who were not in conflict with the proposal under review and for identifying reviewers for proposals on highly specialized topics. Surveyed CEREC reviewers generally found the process easy to navigate and intellectually rewarding; most would be willing to review additional CEREC proposals in the future. External reviewer comments and scores were generally in agreement with those of internal reviewers. Thus, hubs could weight external reviewer scores equally with internal reviewer scores, without feeling compelled to calibrate them. Overall, reviews obtained through CEREC external reviewers appear to be of higher quality and more pertinent, mainly owing to stronger matching of scientific expertise and reduced potential for bias.
DISCUSSION/SIGNIFICANCE OF IMPACT: Some aspects of the process emerged that will be addressed in the future to make the system more efficient. One issue was the burden on the system during multiple simultaneous calls for proposals; future plans call for harmonizing review cycles to avoid these overlaps. Efficiency will also be improved by optimizing the timing of reviewer invitations to minimize the probability of obtaining more reviews than requested. Beyond the original objective of CEREC, the collaboration has led to additional exchange of information regarding methods and processes for running pilot funding programs. For example, one site developed a method using REDCap to manage its reviewer database, an innovation that is being shared with the other CEREC partners. Another site has a well-developed process for integrating community reviewers into its review process and is sharing its training materials with the remaining CEREC partners.
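The REDCap-based reviewer database is mentioned only in passing, so the sketch below is purely illustrative: it shows one plausible way a site could pull a reviewer roster from a REDCap project through REDCap's standard record-export API and perform a naive expertise match. The endpoint URL, token, and field names are hypothetical assumptions, not the site's actual configuration.

```python
# Hypothetical sketch of querying a reviewer database hosted in REDCap.
# The URL, token, and field names are illustrative assumptions.
import requests

REDCAP_API_URL = "https://redcap.example.edu/api/"  # hypothetical endpoint
API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"            # project-specific token


def export_reviewers() -> list[dict]:
    """Export reviewer records as JSON via REDCap's record-export API."""
    response = requests.post(REDCAP_API_URL, data={
        "token": API_TOKEN,
        "content": "record",  # standard REDCap record export
        "format": "json",
        "type": "flat",
        # hypothetical fields a reviewer-database project might define
        "fields": "reviewer_name,reviewer_email,expertise_keywords",
    })
    response.raise_for_status()
    return response.json()


def match_reviewers(reviewers: list[dict], topic: str) -> list[dict]:
    """Naive keyword match of a proposal topic against reviewer expertise."""
    topic = topic.lower()
    return [r for r in reviewers
            if topic in r.get("expertise_keywords", "").lower()]
```

In practice, a site would layer conflict-of-interest screening on top of any such match, consistent with CEREC's emphasis on locating reviewers who are not in conflict with the proposal.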